Appendix. Stochastic Adaptive Activation Function
Intuitively, the ASH activation function is a threshold-based activation function that rectifies its inputs, and it has the following properties:

Property 1. The ASH activation function is parametric. ASH in early layers exhibits a small threshold (large retained percentile) to preserve substantial information, whereas ASH in deeper layers exhibits a comparatively small percentile to rectify futile information.

Property 2. The ASH activation function produces output that depends on the context of the input.

Supplementary Figure 1 illustrates the training graph of loss values and validation accuracies; the y-axis covers the range (0, 0.8).

Appendix D. Classification task

Supplementary Figure 1 illustrates GRAD-CAM (Selvaraju et al., 2017) samples obtained using ResNet-164 and DenseNet models with the ReLU, Swish, and ASH activation functions in the classification task. Property 1 is clearly illustrated in Supplementary Figure 1.
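The threshold-based intuition above can be sketched as a hard percentile cutoff. This is only an illustration of Property 1, not the paper's actual trainable, stochastic ASH formulation; the function name `ash_hard` and its `percentile` parameter are assumptions introduced for the sketch.

```python
import numpy as np

def ash_hard(x, percentile=90.0):
    """Hard-threshold sketch of the ASH intuition: keep inputs at or above
    the given percentile of the activation values, zero out the rest.
    A small cutoff percentile (small threshold, as in early layers) retains
    most of the input; a large cutoff percentile (deeper layers) rectifies
    all but the strongest activations."""
    t = np.percentile(x, percentile)
    return np.where(x >= t, x, 0.0)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
# Early-layer setting: low cutoff keeps most activations.
early = ash_hard(x, percentile=20.0)
# Deep-layer setting: high cutoff keeps only the top activations.
deep = ash_hard(x, percentile=80.0)
```

The sketch uses a hard step for clarity; the paper's ASH instead behaves as a smooth, trainable generalization of Swish.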
Stochastic Adaptive Activation Function
Lee, Kyungsu, Yang, Jaeseung, Lee, Haeyun, Hwang, Jae Youn
The simulation of human neurons and neurotransmission mechanisms has been realized in deep neural networks through the theoretical implementation of activation functions. However, recent studies have reported that the threshold potential of neurons takes different values according to the locations and types of individual neurons, and that existing activation functions are limited in representing this variability. Therefore, this study proposes a simple yet effective activation function that facilitates different thresholds and adaptive activations according to the positions of units and the contexts of inputs. Furthermore, the proposed activation function is mathematically a more generalized form of the Swish activation function, and thus we denote it Adaptive SwisH (ASH). ASH highlights informative features whose values fall in the top percentiles of an input, whereas it rectifies low values. Most importantly, ASH exhibits trainable, adaptive, and context-aware properties compared to other activation functions. Furthermore, ASH represents a general formula for previously studied activation functions and provides a reasonable mathematical grounding for its superior performance. To validate the effectiveness and robustness of ASH, we implemented it in many deep learning models for various tasks, including classification, detection, segmentation, and image generation. Experimental analysis demonstrates that our activation function can provide more accurate prediction and earlier convergence in many deep learning applications.
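For reference, the Swish function that the abstract says ASH generalizes is x · sigmoid(βx). The sketch below shows plain Swish only; the exact form of ASH's generalization is given in the paper and is not reproduced here.

```python
import math

def swish(x, beta=1.0):
    # Swish(x) = x * sigmoid(beta * x); beta = 1 recovers SiLU.
    return x / (1.0 + math.exp(-beta * x))

# Swish is smooth: near zero for large negative x, close to x for
# large positive x, with a small non-monotonic dip below zero.
```

Unlike a hard threshold, this smooth form is differentiable everywhere, which is what makes a trainable β (and, per the abstract, ASH's further parameters) learnable by gradient descent.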
How AI Neural Networks Show That the Mind Is Not the Brain
Recently, I've been arguing (here and here, for example) that we can use artificial neural networks (ANNs) to prove that the mind is not the brain. The argument runs in two premises: if we can conclusively show that the human mind can learn better than a neural network, then the mind is not the brain. For Premise A, I've argued that the differentiable neural network is a superior learning model compared to the brain neuron's "all-or-nothing principle": the neural network receives a graded "hot" or "cold" signal that it can learn from iteratively, whereas the neuron produces a binary "yes" or "no" signal that does not allow for gradual improvement, making such learning impossible for brain neurons. This brings us to Premise B, where I will show that, nonetheless, the human mind can learn better than a neural network.
Study finds a striking difference between neurons of humans and other mammals
Neurons communicate with each other via electrical impulses, which are produced by ion channels that control the flow of ions such as potassium and sodium. In a surprising new finding, MIT neuroscientists have shown that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals. The researchers hypothesize that this reduction in channel density may have helped the human brain evolve to operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks. "If the brain can save energy by reducing the density of ion channels, it can spend that energy on other neuronal or circuit processes," says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT's McGovern Institute for Brain Research, and the senior author of the study. Harnett and his colleagues analyzed neurons from 10 different mammals, the most extensive electrophysiological study of its kind, and identified a "building plan" that holds true for every species they looked at -- except for humans.
Neurons in a dish learn to play Pong
What do you call a network of neurons connected to electrodes that learns to play Pong? Even the scientists behind the experiment don't know how to describe their creation. But the ethical questions that arise out of this fusion of neurons and silicon are plentiful. Brian Patrick Green takes a first shot at articulating them and suggests this might be the real future of Artificial Intelligence. On December 3, 2021, the Australian biological computing startup Cortical Labs released a pre-print article stating that it had turned a network of hundreds of thousands of neurons into a computer-like system capable of playing the video game Pong.
Human Brain vs Artificial Intelligence Systems
Artificial Intelligence has been assuming significance beyond academic debates in the past couple of years. Google and Facebook have claimed that they now have face recognition systems based on Artificial Intelligence that can beat humans at the task. There are reports that many text chats are now manned by Artificial Intelligence systems without the user's knowledge, thus surpassing the Turing test criterion. Proponents of Artificial Intelligence such as Ray Kurzweil have been predicting that within the next 30 years, AI will enable immortality through a concept known as the Singularity, in which we will be able to upload our brains to the cloud and, from then on, our thoughts will live on forever. On the other end of the spectrum, people like Stephen Hawking predicted that Artificial Intelligence could spell the end of human civilization, with computer systems eventually overpowering humans.
Cyborg computer chips will get their brain from human neurons
A.I. has already gotten to almost sci-fi levels of emulating brain activity, so much so that amputees can experience mind-controlled robotic arms, and neural networks might soon be a thing. Cortical Labs sounds like it could have been pulled from the future. Co-founder and CEO Hong Wen Chong and his team are merging biology and technology by embedding real neurons onto a specialized computer chip. Instead of being programmed to act like a human brain, it will use those neurons to think and learn and function on its own. The hybrid chips will save tremendous amounts of energy with an actual neuron doing the processing for them.